24 research outputs found

    Implementing Augmented Reality (AR) on Phonics-based Literacy among Children with Autism

    The implementation of Augmented Reality (AR) technology in education offers a promising approach to enhance the effectiveness and attractiveness of teaching and learning in real-life scenarios. The medium offers unique affordances, combining the physical and virtual worlds with continuous, implicit user control of the point of view and interactivity. This paper introduces augmented reality technology and its capabilities in facilitating children with autism. AR overlays digital information on a live view of the physical world to create a blended experience, providing unique opportunities to learn and interact with information in the physical world. Hence, AR can be an effective technology for developing teaching and learning tools that combine the virtual world with real objects such as transportation, fruits, numbers, and letters of the alphabet, helping the autistic child to connect abstract concepts with real objects and their descriptions. The purpose of this paper is to investigate the use of AR on mobile devices to foster literacy and academic learning skills for children with autism using the phonics learning method. The prototype helps the autistic child capture and associate graphics or pictures of surrounding objects, thereby improving the child's literacy and learning skills. The results show that the children can pronounce and distinguish between the vowels “a”, “i” and “u”, and are able to answer most of the questions in the exercises provided. The interactivity between the children and the application raises their attention and focus, mainly on literacy and learning skills.
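
    Below is a minimal Python sketch, not the paper's actual prototype, of how a recognized object label could be mapped to a phonics prompt that highlights the target vowels “a”, “i” and “u”; object recognition and audio playback are assumed to be handled by the AR framework, and the helper names are hypothetical.

        # Hypothetical helper (not from the paper): turn a recognized object
        # label into a phonics prompt that emphasizes the vowels a, i and u.
        PHONICS_VOWELS = {"a", "i", "u"}

        def phonics_prompt(label: str) -> list:
            """Split a recognized label into letter sounds, tagging target vowels."""
            prompt = []
            for letter in label.lower():
                if letter in PHONICS_VOWELS:
                    prompt.append("[" + letter + "]")  # emphasized vowel sound
                elif letter.isalpha():
                    prompt.append(letter)              # regular letter sound
            return prompt

        # Example: an AR-recognized fruit label
        print(phonics_prompt("durian"))  # ['d', '[u]', 'r', '[i]', '[a]', 'n']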

    Rapid assembly lines model building based on template approach and classification of problems using the cladistics technique

    Competition in the global economic scenario has led to the use of simulation in many areas such as manufacturing, health systems, military systems and transportation. With the importance of simulation in supporting decision making and operations, model building has been recognised as one of the crucial steps in simulation studies. However, model building is not as easy as it may seem. It can be time-consuming and expensive, and requires special training, skills and experience. This research, therefore, aims to investigate a new method to rapidly build a simulation model based on the classification of problems in assembly lines using a cladistics technique and a template approach. Three objectives were established in order to achieve the aim, and a four-stage research programme was developed according to these objectives. The first stage starts by developing a thorough understanding of, and collecting, typical problems in assembly lines. The next stage formulates the classification of problems; the main deliverable is a cladogram, a tree structure that can be used to represent the evolution of problems and their characteristics. The third stage focuses on the development of a proof-of-concept prototype based on the established classification and template approach. The prototype helps users to develop a model by providing the physical elements and the specific elements required for performance measure analysis. The prototype is then tested and validated in the final stage. The results show that the prototype developed can help to rapidly build a simulation model and reduce model development time.
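
    The sketch below illustrates the template idea with hypothetical assembly-line traits and templates; the actual cladogram, problem characteristics and template library developed in this research are not reproduced here.

        # Illustrative sketch only: pick the stored template whose cladogram
        # characteristics best match a new assembly-line problem.
        from dataclasses import dataclass, field

        @dataclass
        class Template:
            name: str
            characteristics: set               # traits taken from the cladogram
            physical_elements: list = field(default_factory=list)
            performance_elements: list = field(default_factory=list)

        TEMPLATES = [
            Template("single_product_serial_line", {"serial", "single_product"},
                     ["source", "machine", "buffer", "sink"],
                     ["throughput", "cycle_time"]),
            Template("mixed_model_line", {"serial", "mixed_products", "setup"},
                     ["source", "machine", "setup", "buffer", "sink"],
                     ["throughput", "wip", "utilisation"]),
        ]

        def select_template(problem_traits: set) -> Template:
            """Return the template sharing the most characteristics with the problem."""
            return max(TEMPLATES, key=lambda t: len(t.characteristics & problem_traits))

        # Example: a serial line with a product mix and setups
        print(select_template({"serial", "mixed_products", "setup"}).name)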

    Augmented Reality based 3D Human Hands Tracking from Monocular True Images Using Convolutional Neural Network

    Precise hand tracking from a monocular moving camera using calibration parameters and semantic cues remains an active research concern due to limited accuracy and high computational overheads. In this context, deep learning frameworks, i.e. convolutional neural networks for tracking human hands and recognizing hand pose in the current camera frame, have become an active research problem. In addition, monocular-camera tracking needs to be addressed in light of updated technology such as the Unity3D engine and related augmented reality plugins. This research aims to track human hands across continuous frames and use the tracked points to draw a 3D model of the hands as an overlay on the original tracked image. In the proposed methodology, the Unity3D environment was used to localize the hand object in augmented reality (AR), and a convolutional neural network was then used to detect the hand palm and hand keypoints from a cropped region of interest (ROI). The proposed method achieved an accuracy rate of 99.2% when single monocular true images were used for tracking. Experimental validation shows the efficiency of the proposed methodology.
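
    As an illustration of this kind of pipeline, the PyTorch sketch below regresses 2D hand keypoints from a cropped ROI with a small convolutional network; the paper's actual network architecture, training data and Unity3D integration are not specified in the abstract, so everything here is an assumed stand-in.

        # Illustrative stand-in, not the paper's network: a small CNN that
        # regresses 21 hand keypoints from a cropped hand ROI.
        import torch
        import torch.nn as nn

        class HandKeypointNet(nn.Module):
            def __init__(self, num_keypoints: int = 21):
                super().__init__()
                self.num_keypoints = num_keypoints
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(128, num_keypoints * 2)  # (x, y) per keypoint

            def forward(self, roi: torch.Tensor) -> torch.Tensor:
                x = self.features(roi).flatten(1)
                return self.head(x).view(-1, self.num_keypoints, 2)

        # Example: one cropped 128x128 hand ROI in RGB
        roi = torch.rand(1, 3, 128, 128)
        keypoints = HandKeypointNet()(roi)   # shape (1, 21, 2), normalized coordinates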

    Comparison of Human Pilot (Remote) Control Systems in Multirotor Unmanned Aerial Vehicle Navigation

    This paper concerns human pilot (remote) control systems in UAV navigation. Demand for Unmanned Aerial Vehicles (UAVs) is increasing tremendously in the aviation industry and in research. A UAV is a flying machine with no pilot onboard that can be controlled by ground-based operators. In this paper, a comparison is made between different proposed remote control systems and devices for navigating multirotor UAVs, such as hand controllers, gesture and body-posture techniques, and vision-based techniques. The reviews discussed in this paper draw on various research sources related to UAVs and their navigation systems. Every method has its pros and cons depending on the situation. At the end of the study, these methods are analyzed and the best method is chosen in terms of accuracy and efficiency.
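
    To make one of the compared modalities concrete, the sketch below maps hand-gesture classes to multirotor velocity set-points; the gesture names and command values are illustrative assumptions, not taken from any specific system reviewed in the paper.

        # Illustrative mapping only: gesture class -> multirotor velocity set-point.
        GESTURE_TO_COMMAND = {
            "palm_up":   {"vz": 0.5,  "vx": 0.0, "yaw_rate": 0.0},  # ascend
            "palm_down": {"vz": -0.5, "vx": 0.0, "yaw_rate": 0.0},  # descend
            "point_fwd": {"vz": 0.0,  "vx": 1.0, "yaw_rate": 0.0},  # move forward
            "fist":      {"vz": 0.0,  "vx": 0.0, "yaw_rate": 0.0},  # hover
        }

        def gesture_command(gesture: str) -> dict:
            """Return the velocity set-point for a recognized gesture (hover if unknown)."""
            return GESTURE_TO_COMMAND.get(gesture, GESTURE_TO_COMMAND["fist"])

        print(gesture_command("point_fwd"))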

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    3D gesture recognition and tracking for augmented reality and virtual reality have become a major research interest because of advances in smartphone technology. By interacting with 3D objects in augmented reality and virtual reality, users gain a better understanding of the subject matter, although customized hardware support is often required and overall experimental performance needs to be satisfactory. This research investigates various current vision-based 3D gestural architectures for augmented reality and virtual reality. The core goal is to analyze methods and frameworks, followed by experimental performance, for recognizing and tracking hand gestures and interacting with virtual objects on smartphones. Experimental evaluation of existing methods is categorized into three categories, i.e. hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for the practical use of 3D gesture tracking in augmented reality and virtual reality. The hardware set-up covers types of gloves, fingerprint devices, and types of sensors. Documentation covers classroom setup manuals, questionnaires, recordings for improvement, and stress-test applications. The last part of the experimental section covers the datasets used by existing research. The comprehensive illustration of various methods, frameworks, and experimental aspects can significantly contribute to 3D gesture recognition and tracking for augmented reality and virtual reality.

    A comprehensive review towards segmentation and detection of cancer cell and tumor for dynamic 3D reconstruction

    Automated cancer cell and tumor segmentation and detection for 3D modeling remain an unsolved research problem in the computer vision, image processing, and pattern recognition research domains. The human body is a complex three-dimensional structure in which numerous types of cancers and tumors may exist, regardless of shape or position. A three-dimensional (3D) reconstruction of cancer cells and tumors from body parts does not lose information the way 2D shape visualization does. Various methodologies for segmentation and detection toward 3D reconstruction of cancer cells and tumors have been proposed in previous research. However, the pursuit of a better methodology remains open due to the lack of efficient feature extraction for detailed surface information, misclassification during training, and low tissue contrast, which cause low detection and precision rates with high computational complexity during detection and segmentation. This research presents a comprehensive and critical review of segmentation and detection methodologies for cancer-affected cells and tumors in the human body, on the basis of three-dimensional reconstruction from MRI or CT images. First, the core research background is illustrated, highlighting the various aspects addressed by this review. After that, previous methods with their advantages and disadvantages are presented, followed by the phases used as frameworks in previous research. Then, the extensive experimental evaluations performed in previous research are summarized with various performance metrics. Finally, the overall observations on previous research are summarized in two aspects, i.e. observations on common research methodologies and recommended areas for further research. The reviews presented in this paper draw on an extensive study of various research papers and can significantly contribute to computer vision research and to future development and research directions.
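
    As an example of the performance metrics referred to above, the snippet below computes the Dice similarity coefficient between a predicted tumor mask and a ground-truth mask, one metric commonly reported by segmentation studies of this kind; the reviewed papers may also use precision, recall, or distance-based measures.

        # Dice similarity coefficient between two binary masks:
        # Dice = 2*|A intersect B| / (|A| + |B|)
        import numpy as np

        def dice_coefficient(pred, truth, eps=1e-8):
            pred = np.asarray(pred, dtype=bool)
            truth = np.asarray(truth, dtype=bool)
            intersection = np.logical_and(pred, truth).sum()
            return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

        # Example with two small binary masks
        pred = [[1, 1, 0], [0, 1, 0]]
        truth = [[1, 0, 0], [0, 1, 1]]
        print(round(dice_coefficient(pred, truth), 3))  # 0.667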

    Implementing Smart Mobile Application to Achieve a Sustainable Campus

    The use of technology nowadays is commonplace, especially in universities. The use of computer technology and advanced networking to create integrated services and enable communication among the university community is the concept of a digital campus. In the education sector, the digital campus needs to be built as an essential foundation for the modernization of education and to facilitate a better campus life. This paper presents the concept and framework of the Digital Campus as part of the university's efforts to foster an economically and socially sustainable campus. The implementation of co-created value in the framework increases the efficiency of the digital campus concept. The use of mobile applications is well suited to achieving campus sustainability because of their increasing significance and capability. The UKM Mobile Application, as part of the Digital Campus strategy, is seen as able to contribute to the sustainability of the campus. We also present the implementation of the Digital Campus based on smart card usage in applications such as the attendance system, canteen payment, and academic services. Mobile application development that embeds value co-creation can improve the quality of service in terms of good communication with customers and better services for the university.
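
    As a minimal sketch, with hypothetical names, of the smart-card usage described above, the snippet below models the kind of attendance record a card reader could log; the actual UKM Mobile Application data model and services are not described in the abstract.

        # Hypothetical data model, not the UKM system's actual one: log an
        # attendance record when a smart card is tapped on a campus reader.
        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class AttendanceRecord:
            card_id: str        # student smart-card identifier
            location: str       # reader location, e.g. lecture hall or canteen
            timestamp: datetime

        def record_tap(card_id: str, location: str) -> AttendanceRecord:
            """Create an attendance record for a smart-card tap."""
            return AttendanceRecord(card_id, location, datetime.now())

        print(record_tap("student-0001", "Lecture Hall A"))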

    A Framework to Visualize 3D Breast Tumor Using X-Ray Vision Technique in Mobile Augmented Reality

    The number of breast cancer patients who require breast biopsy has increased over the past years, and stereotactic biopsy uses a series of images to carefully position the imaging equipment and target the area of concern. However, it is constrained by the lack of accurate 3D tumor visualization. An Augmented Reality (AR) guided breast biopsy system has become the method of choice for researchers, yet this AR tumor visualization is limited to superimposing the 3D imaging data only. In this paper, a framework to visualize a 3D breast tumor is introduced, allowing the tumor to be accurately visualized through the skin of a US-9 opaque breast phantom on a mobile display. This mobile AR visualization technique consists of four phases: first, images are acquired from Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) and processed into 3D slices; second, these 3D grayscale slices are refined into a 3D breast tumor model using a 3D modeling reconstruction technique. Third, in visualization processing, the virtual 3D breast tumor model is enhanced using an X-ray visualization technique to see through the skin of the phantom for better visualization. Finally, the composition is displayed on a smartphone with optimized accuracy of the 3D tumor visualization in six degrees of freedom (6DOF). An experiment was conducted to test the visualization accuracy on the US-9 breast phantom, which has 12 tumors of different sizes categorized into three levels. Our framework demonstrates the 3D tumor visualization accuracy; however, the accuracy comparison is pending. Two radiologists from Hospital Serdang successfully visualized a 3D tumor in X-ray vision. The framework is perceived as an improved visualization experience because the AR X-ray visualization allows a direct understanding of the breast tumor beyond the visible surface, toward accurate biopsy targets.
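
    The sketch below illustrates the reconstruction phase described above (grayscale slices to a 3D tumor surface) using a standard marching-cubes algorithm from scikit-image; the threshold and the choice of library are assumptions for illustration and do not reproduce the paper's own reconstruction or X-ray visualization pipeline.

        # Illustrative reconstruction step only: extract a triangle mesh from
        # stacked grayscale slices with marching cubes (scikit-image).
        import numpy as np
        from skimage import measure

        def reconstruct_tumor_surface(slices, threshold=0.5):
            """Return (vertices, faces) of the isosurface at the given threshold."""
            verts, faces, _normals, _values = measure.marching_cubes(slices, level=threshold)
            return verts, faces

        # Example: a synthetic volume containing a bright spherical "tumor"
        z, y, x = np.mgrid[-16:16, -16:16, -16:16]
        volume = (np.sqrt(x**2 + y**2 + z**2) < 8).astype(float)
        verts, faces = reconstruct_tumor_surface(volume)
        print(verts.shape, faces.shape)   # mesh of the sphere surface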
